[pyTorch] Replace the make_empty implementation to use C++ implementation #2666

Open

ptrendx wants to merge 5 commits into NVIDIA:main from ptrendx:pr_unify_make_empty
Conversation

@ptrendx (Member) commented Feb 10, 2026

Description

This PR unifies QuantizedTensor creation by routing it through the C++ implementation of create_tensor.
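In outline, the unified path now looks like this: a minimal sketch, assuming the exact argument order of the create_empty_quantized_tensor binding (its existence and the requires_grad handling are taken from the review summary further down, not from the diff itself):

import torch
import transformer_engine_torch as tex  # TE's pybind11 extension module

def make_empty(self, shape, *, dtype=torch.float32, device=None,
               pin_memory=False, requires_grad=False):
    # Construct a quantized tensor with uninitialized data via the C++ backend.
    if isinstance(device, str):
        device = torch.device(device)  # accept device strings for ergonomics
    tensor = tex.create_empty_quantized_tensor(self, shape, dtype, device, pin_memory)
    if requires_grad:
        tensor.requires_grad_()  # applied on the Python side, per the flowchart below
    return tensor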

Type of change

  • Documentation change (change only to the documentation, either a fix or new content)
  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Infra/Build change
  • Code refactoring

Changes

Please list the changes introduced in this PR:

  • Replaced the Python implementations of make_empty with calls to the C++ create_tensor

Checklist:

  • I have read and followed the contributing guidelines
  • The functionality is complete
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

@ptrendx ptrendx requested a review from negvet February 10, 2026 00:17
@ptrendx (Member, Author) commented Feb 10, 2026

/te-ci L1 pytorch

1 similar comment (@ptrendx, Feb 10, 2026: "/te-ci L1 pytorch")

"""Construct quantized tensor with uninitialized data"""
raise NotImplementedError(
f"{self.__class__.__name__} class does not implement make_empty function, "
"required for construction of unintialized quantized tensor"
A Collaborator commented on this removed code:
This clear NotImplementedError is beneficial for custom quantizers that do not override make_empty().
With this change, a custom quantizer without make_empty() will instead fail inside the C++ convert_quantizer, because there is no registered C++ converter for it, and the resulting NVTE_ERROR("Unexpected type for quantizer") is not as clear as the NotImplementedError.

What about making the C++ error clearer, or, better yet, adding a check in the base Quantizer.make_empty:

def make_empty(self, *args, **kwargs):  # signature elided in the suggestion
    # Custom quantizers have no registered C++ converter, so fail early
    # with a clear Python error instead of NVTE_ERROR in convert_quantizer.
    if getattr(self, "custom", False):
        raise NotImplementedError(
            f"{self.__class__.__name__} does not implement make_empty"
        )
    # ... existing C++ path ...
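
For illustration, here is a self-contained sketch of the proposed behavior; the Quantizer base class below is a stand-in, and the custom flag is an assumption taken from the suggestion above, not an existing attribute:

class Quantizer:  # stand-in for transformer_engine's base Quantizer
    custom = False  # assumed marker: True means no registered C++ converter

    def make_empty(self, *args, **kwargs):
        if getattr(self, "custom", False):
            raise NotImplementedError(
                f"{self.__class__.__name__} does not implement make_empty"
            )
        return "quantized tensor from C++ create_tensor"  # placeholder

class MyCustomQuantizer(Quantizer):  # hypothetical user-defined quantizer
    custom = True

MyCustomQuantizer().make_empty((128, 128))
# -> NotImplementedError: MyCustomQuantizer does not implement make_empty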

ptrendx and others added 4 commits February 18, 2026 17:27
@ptrendx ptrendx force-pushed the pr_unify_make_empty branch from 98f9681 to 6be430a on February 19, 2026 01:27
@ptrendx ptrendx marked this pull request as ready for review February 19, 2026 22:41
@ptrendx (Member, Author) commented Feb 19, 2026

/te-ci pytorch L1

@greptile-apps (Contributor) bot commented Feb 19, 2026

Greptile Summary

Unified quantized tensor creation by migrating Python make_empty implementations to a single C++ create_tensor interface, removing ~287 lines of duplicated Python code across 4 quantizer types while adding device and pin_memory parameter support.

Key changes:

  • Added device and pin_memory parameters to all create_tensor C++ method signatures
  • Created new create_empty_quantized_tensor function exposed via pybind11
  • Replaced abstract Quantizer.make_empty() with concrete implementation calling C++ backend
  • Removed redundant Python implementations from Float8Quantizer, Float8CurrentScalingQuantizer, Float8BlockQuantizer, MXFP8Quantizer, and NVFP4Quantizer
  • Added device string-to-torch.device conversion in the Python wrapper for better API ergonomics (see the sketch after this list)
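
A minimal sketch of that conversion, assuming a hypothetical helper name (the actual wrapper code may be inlined differently):

import torch

def _as_torch_device(device):
    # Hypothetical helper: the pybind11 binding expects a torch.device,
    # so plain strings such as "cuda:0" are converted before the C++ call.
    if isinstance(device, str):
        return torch.device(device)
    return device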

Confidence Score: 5/5

  • This PR is safe to merge with minimal risk
  • Well-structured refactoring that consolidates duplicate code into a single C++ implementation without changing functionality. The changes are straightforward, removing ~287 lines of duplicated Python code while properly handling device and pin_memory parameters. Device string handling was correctly added to prevent type errors.
  • No files require special attention

Important Files Changed

transformer_engine/pytorch/csrc/extensions/cast.cpp
    Implemented the create_empty_quantized_tensor function that calls quantizer_cpp->create_tensor with the device and pin_memory parameters
transformer_engine/pytorch/csrc/quantizer.cpp
    Updated all quantizer create_tensor implementations to accept and use device and pin_memory parameters for tensor allocation
transformer_engine/pytorch/quantized_tensor.py
    Replaced the abstract make_empty with a concrete implementation calling the C++ create_empty_quantized_tensor, and added device string handling
transformer_engine/pytorch/tensor/float8_tensor.py
    Removed 83 lines of Python make_empty implementations for Float8Quantizer and Float8CurrentScalingQuantizer
transformer_engine/pytorch/tensor/nvfp4_tensor.py
    Removed 84 lines of Python make_empty implementation for NVFP4Quantizer
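
As a usage illustration of the new parameters (the Float8Quantizer constructor arguments and the exact make_empty keywords here are assumptions based on this summary, not verified against the diff):

import torch
import transformer_engine_torch as tex
from transformer_engine.pytorch.tensor.float8_tensor import Float8Quantizer

# Assumed construction; the real Float8Quantizer signature may differ.
quantizer = Float8Quantizer(
    scale=torch.ones(1, device="cuda"),
    amax=torch.zeros(1, device="cuda"),
    fp8_dtype=tex.DType.kFloat8E4M3,
)

empty = quantizer.make_empty(
    (1024, 1024),
    dtype=torch.bfloat16,  # nominal high-precision dtype
    device="cuda",         # strings are normalized to torch.device
    pin_memory=False,      # new parameter added by this PR
)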

Flowchart

%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[Python: quantizer.make_empty] --> B[tex.create_empty_quantized_tensor]
    B --> C[C++: create_empty_quantized_tensor]
    C --> D[convert_quantizer]
    C --> E[GetTransformerEngineDType]
    D --> F[quantizer_cpp->create_tensor]
    E --> F
    F --> G{Quantizer Type}
    G --> H[Float8Quantizer::create_tensor]
    G --> I[Float8BlockQuantizer::create_tensor]
    G --> J[MXFP8Quantizer::create_tensor]
    G --> K[NVFP4Quantizer::create_tensor]
    G --> L[NoneQuantizer::create_tensor]
    H --> M[Create Float8Tensor with device/pin_memory]
    I --> N[Create Float8BlockwiseQTensor with device/pin_memory]
    J --> O[Create MXFP8Tensor with device/pin_memory]
    K --> P[Create NVFP4Tensor with device/pin_memory]
    L --> Q[Create unquantized Tensor with device/pin_memory]
    M --> R[Return QuantizedTensor]
    N --> R
    O --> R
    P --> R
    Q --> R
    R --> S[Python: Apply requires_grad if needed]

Last reviewed commit: 9cad6d0

@greptile-apps (Contributor) bot left a comment

10 files reviewed, no comments

